Creators/Authors contains: "Wang, Xiaojun"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites, whose policies may differ from those of this site.

  1. The availability of large datasets of organism images, combined with advances in artificial intelligence (AI), has significantly enhanced the study of organisms through images, unveiling biodiversity patterns and macro-evolutionary trends. However, existing machine learning (ML)-ready organism datasets have several limitations. First, these datasets often focus on species classification only, overlooking tasks involving visual traits of organisms. Second, they lack detailed visual trait annotations, such as pixel-level segmentation, that are crucial for in-depth biological studies. Third, these datasets predominantly feature organisms in their natural habitats, posing challenges for aquatic species like fish, where underwater images often suffer from poor visual clarity, obscuring critical biological traits. This gap hampers the study of aquatic biodiversity patterns, which is necessary for assessing climate change impacts, as well as evolutionary research on aquatic species morphology. To address this, we introduce the Fish-Visual Trait Analysis (Fish-Vista) dataset: a large, annotated collection of about 80K fish images spanning 3,000 species, supporting several challenging and biologically relevant tasks, including species classification, trait identification, and trait segmentation. These images have been curated through a sophisticated data processing pipeline applied to a cumulative set of images obtained from various museum collections. Fish-Vista ensures that the visual traits of each specimen are clearly visible and provides fine-grained labels of the visual traits present in each image. It also offers pixel-level annotations of 9 different traits for about 7,000 fish images, facilitating additional trait segmentation and localization tasks. The ultimate goal of Fish-Vista is to provide a clean, carefully curated, high-resolution dataset that can serve as a foundation for accelerating biological discoveries using advances in AI. Finally, we provide a comprehensive analysis of state-of-the-art deep learning techniques on Fish-Vista.
    Free, publicly-accessible full text available June 15, 2026
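A minimal sketch of how such a dataset might be used for the species classification task, assuming a conventional image-folder layout; the Fish-Vista file format and API are not specified in the abstract, so the paths and layout below are hypothetical:

```python
# Minimal sketch: fine-tuning a stock backbone for species classification
# on a Fish-Vista-style image folder. Directory layout and class count are
# hypothetical; the abstract does not document the released file format.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: fish_vista/train/<species_name>/*.jpg
train_ds = datasets.ImageFolder("fish_vista/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```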
  2. A new science discipline has emerged within the last decade at the intersection of informatics, computer science, and biology: Imageomics. Like most other -omics fields, Imageomics uses emerging technologies to analyze biological data, in this case drawn from images. One of the most widely applied data analysis methods for image datasets is Machine Learning (ML). In 2019, we started working on a United States National Science Foundation (NSF) funded project, known as Biology Guided Neural Networks (BGNN), with the purpose of extracting information about biology by using neural networks and biological guidance such as species descriptions, identifications, phylogenetic trees, and morphological annotations (Bart et al. 2021). Even though the variety and abundance of biological data are satisfactory for some ML analyses and the data are openly accessible, researchers still spend up to 80% of their time preparing data into a usable, AI-ready format, leaving only 20% for exploration and modeling (Long and Romanoff 2023). For this reason, we have built a dataset composed of digitized fish specimens, taken either directly from collections or from specialized repositories. The range of digital representations we cover is broad and growing, from photographs and radiographs to CT scans and even illustrations. We have added new groups of vocabularies to the dataset management system, including image quality metadata, extended image metadata, and batch metadata. With the image quality metadata and extended image metadata, we aim to extract information from the digital objects that can help ML scientists in their research with filtering, image processing, and object recognition routines. Image quality metadata provides information about the objects contained in the image, the features and condition of the specimen, and some basic visual properties of the image, while extended image metadata provides information about technical properties of the digital file and the digital multimedia object (Bakış et al. 2021, Karnani et al. 2022, Leipzig et al. 2021, Pepper et al. 2021, Wang et al. 2021) (see details on the Fish-AIR vocabulary web page). Batch metadata is used for separating different datasets and facilitates downloading and uploading data in batches with additional batch information and supplementary files. Additional flexibility, built into the database infrastructure using an RDF framework, will enable the system to host different taxonomic groups, which might require new metadata features (Jebbia et al. 2023). The combination of these features, along with FAIR (Findable, Accessible, Interoperable, Reusable) principles and reproducibility, provides Artificial Intelligence Readiness (AIR; Long and Romanoff 2023) for the dataset. Fish-AIR provides an easy-to-access, filtered, annotated, and cleaned biological dataset for researchers from different backgrounds and facilitates the integration of biological knowledge based on digitized preserved specimens into ML pipelines. Because of the flexible database infrastructure and the addition of new datasets, researchers will also be able to access additional types of data, such as landmarks, specimen outlines, annotated parts, and quality scores, in the near future. Already, the dataset is the largest and most detailed AI-ready fish image dataset with an integrated Image Quality Management System (Jebbia et al. 2023, Wang et al. 2021).
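As an illustration of the filtering that the quality metadata is meant to enable, here is a minimal sketch that selects an ML-ready subset of a downloaded batch; the file names and column names are hypothetical stand-ins for the actual Fish-AIR vocabulary terms:

```python
# Minimal sketch: filtering a Fish-AIR-style download by image quality
# metadata before an ML pipeline. File and column names are hypothetical;
# consult the Fish-AIR vocabulary pages for the actual metadata terms.
import pandas as pd

quality = pd.read_csv("batch_0001/image_quality_metadata.csv")
extended = pd.read_csv("batch_0001/extended_image_metadata.csv")

meta = quality.merge(extended, on="image_id")

# Keep only single-specimen, lateral-view images above a quality threshold.
usable = meta[
    (meta["specimen_count"] == 1)
    & (meta["view"] == "lateral")
    & (meta["quality_score"] >= 4)
]
usable.to_csv("batch_0001/ml_ready_subset.csv", index=False)
```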
  3. Abstract: CO2 electroreduction (CO2R) operating in acidic media circumvents the problems of carbonate formation and CO2 crossover in neutral/alkaline electrolyzers. Alkali cations have been universally recognized as indispensable components for acidic CO2R, yet they cause the inevitable issue of salt precipitation. It is therefore desirable to realize alkali-cation-free CO2R in pure acid. However, without alkali cations, stabilizing *CO2 intermediates at the acidic interface through the catalyst itself poses a challenge. Herein, we first demonstrate that a carbon nanotube-supported, molecularly dispersed cobalt phthalocyanine (CoPc@CNT) catalyst provides Co single-atom active sites with energetically localized d states that strengthen adsorbate-surface interactions, stabilizing *CO2 intermediates at the acidic interface (pH = 1). As a result, we realize CO2 conversion to CO in pure acid with a faradaic efficiency of 60% at pH = 2 in a flow cell. Furthermore, CO2 is successfully converted in a cation exchange membrane-based electrode assembly with a faradaic efficiency of 73%. For CoPc@CNT, acidic conditions also promote the intrinsic activity of CO2R compared to alkaline conditions, since the potential-limiting step, *CO2 to *COOH, is pH-dependent. This work provides a new understanding of the stabilization of reaction intermediates and facilitates the design of catalysts and devices for acidic CO2R.
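For readers unfamiliar with the figure of merit quoted above, the faradaic efficiency for CO follows the standard definition FE = z·n·F/Q, where z is the number of electrons per CO molecule (2 for CO2 to CO), n the moles of CO produced, F the Faraday constant, and Q the total charge passed. A worked example with illustrative numbers (not taken from the paper):

```python
# Worked example of the faradaic-efficiency figure of merit:
# FE = z * n * F / Q. All quantities below are illustrative only.
F = 96485.0          # C/mol, Faraday constant
z = 2                # electrons transferred per CO molecule (CO2 -> CO)
n_CO = 3.1e-6        # mol of CO detected (illustrative)
Q = 1.0              # C of total charge passed (illustrative)

fe = z * n_CO * F / Q
print(f"Faradaic efficiency for CO: {fe:.0%}")  # ~60% with these numbers
```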
  4. Adam, N.; Neuhold, E.; Furuta, R. (Ed.)
    Metadata is a key data source for researchers seeking to apply machine learning (ML) to the vast collections of digitized biological specimens that can be found online. Unfortunately, the associated metadata is often sparse and, at times, erroneous. This paper extends previous research conducted with the Illinois Natural History Survey (INHS) collection (7,244 specimen images) that uses computational approaches to analyze image quality and then automatically generates 22 metadata properties representing the image quality and morphological features of the specimens. In the research reported here, we demonstrate the extension of our initial work to the University of Wisconsin Zoological Museum (UWZM) collection (4,155 specimen images). Further, we enhance our computational methods in four ways: (1) augmenting the training set, (2) applying contrast enhancement, (3) upscaling small objects, and (4) refining our processing logic. Together these new methods improved our overall error rate from 4.6% to 1.1%. These enhancements also allowed us to compute an additional set of 17 image-based metadata properties. The new metadata properties provide supplemental features and information that may also be used to analyze and classify the fish specimens. Examples of these new features include convex area, eccentricity, perimeter, skew, etc. The newly refined process further outperforms humans in terms of time and labor cost, as well as accuracy, providing a novel solution for leveraging digitized specimens with ML. This research demonstrates the ability of computational methods to enhance the digital library services associated with the tens of thousands of digitized specimens stored in open-access repositories worldwide by generating accurate and valuable metadata for those repositories.
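The shape-derived metadata properties named above (convex area, eccentricity, perimeter) correspond to standard region properties; a minimal sketch using scikit-image, with a stand-in thresholding step rather than the paper's full pipeline:

```python
# Minimal sketch of generating shape-based metadata properties such as
# convex area, eccentricity, and perimeter from a binary specimen mask,
# using scikit-image (>= 0.19 for the area_convex property name). The
# thresholding step is a stand-in; the paper's actual pipeline (contrast
# enhancement, upscaling, etc.) is more involved.
from skimage import io, measure
from skimage.filters import threshold_otsu

image = io.imread("specimen.png", as_gray=True)  # hypothetical input file
mask = image < threshold_otsu(image)  # assumes dark specimen, light background

props = measure.regionprops(measure.label(mask))
fish = max(props, key=lambda p: p.area)  # largest component = specimen

metadata = {
    "convex_area": fish.area_convex,
    "eccentricity": fish.eccentricity,
    "perimeter": fish.perimeter,
    "bounding_box": fish.bbox,
}
print(metadata)
```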
  5. We have been successfully developing Artificial Intelligence (AI) models for automatically classifying fish species using neural networks over the last three years during the "Biology Guided Neural Network" (BGNN) project*1. We continue our efforts in another, broader project, "Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning"*2. One of the main topics in the Imageomics Project is "Morphological Barcoding". Within the Morphological Barcoding study, we are trying to build a gold-standard method to identify species in different taxonomic groups based on their external morphology. This list of characters will contain, but not be limited to, landmarks and quantitative traits such as measurements of distances, areas, angles, proportions, colors, histograms, patterns, shapes, and outlines. The taxonomic groups will be limited by the data available, and we will be using fish as the topic of interest in this preliminary study. In the current study, we have focused on extracting morphological characters that rely on anatomical features of fish, such as the location of the eye, body length, and area of the head. We developed a schematic workflow to describe how we processed the data and extracted the information (Fig. 1). We performed our analysis on the segmented images produced by the Karpatne Team within the BGNN project (Bart et al. 2021). Segmentation was performed using artificial neural networks for semantic segmentation (Long et al. 2015); the segments to be detected were the eye, head, trunk, caudal fin, pectoral fin, dorsal fin, anal fin, and pelvic fin. Segmented images, metadata, and species lists were given as input to the workflow. During the cleaning and filtering subroutines, a subset of the data was created by filtering down to the desired segmented images with corresponding metadata. In the validation step, segmented images were checked by comparing the number of specimens in the original image to the separate bounding-boxed specimen images, noting violations in the segmentations, counts of segments, the relative positions of the segments, traces of batch effects, and comparisons of segment size and shape; based on these validation criteria, each segmented image was assigned a score from 1 to 5, similar to the Adobe XMP Basic namespace. The landmarks and traits to be used in the study were extracted from the current literature, mindful that some of the features may not be extractable computationally. Using the landmark list, landmarks were extracted by adapting descriptions from the literature to the segments, such as picking the leftmost point on the head as the tip of the snout and the top-left point on the pelvic fin as the base of the pelvic fin. These 2D vectors (coordinates) were then fine-tuned by adjusting their positions to lie on the outline of the fish, since most of the landmarks are located on the outline. Procrustes analysis*3 was performed to scale all of the measurements together, and point clouds were generated. These vectors were stored as landmark data. Segment centroids were also treated as landmarks. Extracted landmarks were validated by comparing their positions relative to one another and, where available, against their manually captured positions. A score was assigned based on these comparisons, similar to the segmentation validation score.
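A minimal sketch of the landmark-picking rules described above (leftmost point of the head segment as the tip of the snout; centroid of the eye segment as the eye landmark), assuming a hypothetical integer-labeled segmentation mask:

```python
# Minimal sketch of the landmark-picking rules described in the abstract.
# Segment label values and the mask file are hypothetical stand-ins for
# the BGNN segmentation output.
import numpy as np

HEAD, EYE = 2, 1   # hypothetical integer labels in the segmentation mask

def leftmost_point(mask: np.ndarray, label: int) -> tuple[int, int]:
    rows, cols = np.nonzero(mask == label)
    i = np.argmin(cols)          # smallest x-coordinate = leftmost pixel
    return int(rows[i]), int(cols[i])

def centroid(mask: np.ndarray, label: int) -> tuple[float, float]:
    rows, cols = np.nonzero(mask == label)
    return float(rows.mean()), float(cols.mean())

seg = np.load("segmented_fish.npy")   # hypothetical (H, W) label mask
snout_tip = leftmost_point(seg, HEAD)
eye_center = centroid(seg, EYE)
```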
Based on the trait list definitions, traits were extracted by measuring distances between two landmarks, angles between three landmarks, areas enclosed by three or more landmarks, areas of the segments, ratios between two distances or areas, and ratios between a distance and the square root of an area; these values were stored as trait data. Finally, these values were compared within their own species clusters to check for errors and whether the values remained within expected bounds. Trait scores were calculated from these error calculations, similar to the segmentation scores, with the aim of selecting good-quality measurements for further analysis such as Principal Component Analysis. Our work on extracting features from segmented digital specimen images has shown that the accuracy of traits such as measurements, areas, and angles depends on the accuracy of the landmarks, and the accuracy of the landmarks is highly dependent on the segmentation of the parts of the specimen. Landmarks located on the outline of the body (the combination of the head and trunk segments of the fish) were found to be more accurate than landmarks that represent inner features, such as the mouth and pectoral fin, in some taxonomic groups. However, the eye location is almost always accurate, since it is based on the centroid of the eye segment. In the remaining part of this study we will improve the score calculation for segments, images, landmarks, and traits, and assess the accuracy of the scores by comparing the statistical results obtained from analysis of the landmark and trait data.
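A minimal sketch of the trait measures described above (distance between two landmarks, angle at a landmark, and area enclosed by three or more landmarks via the shoelace formula), with illustrative coordinates:

```python
# Minimal sketch of landmark-derived trait measures: distances, angles,
# and polygon areas. Landmark coordinates below are illustrative (x, y).
import numpy as np

def distance(a, b):
    return float(np.linalg.norm(np.subtract(a, b)))

def angle(a, b, c):
    """Angle at vertex b, in degrees, formed by landmarks a-b-c."""
    v1, v2 = np.subtract(a, b), np.subtract(c, b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def polygon_area(points):
    """Shoelace formula over landmarks ordered around the polygon."""
    x, y = np.asarray(points).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Example with illustrative coordinates:
snout, eye, dorsal = (10.0, 40.0), (25.0, 35.0), (60.0, 15.0)
body_length = distance(snout, dorsal)
head_angle = angle(snout, eye, dorsal)     # angle at the eye landmark
head_area = polygon_area([snout, eye, dorsal])
```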
  6. Conference: 2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL), Sept. 27-30, 2021, Champaign, IL, USA. Metadata are key descriptors of research data, particularly for researchers seeking to apply machine learning (ML) to the vast collections of digitized specimens. Unfortunately, the available metadata are often sparse and, at times, erroneous. Additionally, it is prohibitively expensive to address these limitations through traditional, manual means. This paper reports on research that applies machine-driven approaches to analyzing digitized fish images and extracting various important features from them. The digitized fish specimens are being analyzed as part of the Biology Guided Neural Networks (BGNN) initiative, which is developing a novel class of artificial neural networks using phylogenies and anatomy ontologies. Automatically generated metadata are crucial for identifying the high-quality images needed for the neural network's predictive analytics. Methods that combine ML and image informatics techniques allow us to rapidly enrich the existing metadata associated with the 7,244 images from the Illinois Natural History Survey (INHS) used in our study. Results show we can accurately generate many key metadata properties relevant to the BGNN project, as well as general image quality metrics (e.g., brightness and contrast). Results also show that we can accurately generate bounding boxes and segmentation masks for fish, which are needed for subsequent machine learning analyses. The automatic process outperforms humans in terms of time and accuracy, and provides a novel solution for leveraging digitized specimens in ML. This research demonstrates the ability of computational methods to enhance the digital library services associated with the tens of thousands of digitized specimens stored in open-access repositories worldwide.
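A minimal sketch of the general image quality metrics mentioned (brightness and contrast), computed here as mean gray level and RMS contrast; the paper may define these metrics differently:

```python
# Minimal sketch of basic image-quality metrics for a digitized specimen.
# Metric definitions (mean gray level, RMS contrast) are assumptions; the
# input file name is hypothetical.
import numpy as np
from skimage import io

image = io.imread("INHS_specimen.jpg", as_gray=True)  # values in [0, 1]

brightness = float(image.mean())            # mean gray level
contrast = float(image.std())               # RMS contrast
dynamic_range = float(image.max() - image.min())

print(f"brightness={brightness:.3f} contrast={contrast:.3f} "
      f"range={dynamic_range:.3f}")
```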
  7. Abstract: Image-based machine learning tools are an ascendant ‘big data’ research avenue. Citizen science platforms, like iNaturalist, and museum-led initiatives provide researchers with an abundance of data and knowledge to extract, including metadata, species identifications, and phenomic data. Ecological and evolutionary biologists are increasingly applying complex, multi-step processes to such data. These processes often incorporate machine learning techniques, frequently built by others, that are difficult for other members of a collaboration to reuse. We present a conceptual workflow model for machine learning applications that use image data to extract biological knowledge in the emerging field of imageomics. We derive an implementation of this conceptual workflow for a specific imageomics application that adheres to FAIR principles as a formal workflow definition, allowing fully automated and reproducible execution and consisting of reusable workflow components. We outline technologies and best practices for creating an automated, reusable, and modular workflow, and we show how they promote the reuse of machine learning models and their adaptation to new research questions. This conceptual workflow can be adapted: it can be semi-automated, contain different components than those presented here, or have parallel components for comparative studies. We encourage researchers, both computer scientists and biologists, to build upon this conceptual workflow, which combines machine learning tools on image data to answer novel scientific questions in their respective fields.
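A minimal sketch of the modular-workflow idea: each component is a small function with explicit inputs and outputs, so steps can be swapped or run in parallel for comparative studies. The step names and composition are illustrative, not the paper's actual workflow definition:

```python
# Minimal sketch of a modular, reusable workflow built from small
# components. All step names, URLs, and return values are illustrative.
from typing import Callable

def fetch_images(source_url: str) -> list[str]:
    # Placeholder for downloading from a citizen-science or museum source.
    return [f"{source_url}/image_{i}.jpg" for i in range(3)]

def extract_metadata(paths: list[str]) -> list[dict]:
    return [{"path": p} for p in paths]

def run_model(records: list[dict]) -> list[dict]:
    # Placeholder for a pretrained species-identification model.
    return [{**r, "species": "unknown"} for r in records]

def pipeline(*steps: Callable):
    """Compose steps left to right; each step's output feeds the next."""
    def run(x):
        for step in steps:
            x = step(x)
        return x
    return run

workflow = pipeline(fetch_images, extract_metadata, run_model)
print(workflow("https://example.org/collection"))
```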